Appendix to " Adam with Bandit Sampling for Deep Learning "

Neural Information Processing Systems

According to Theorem 4.1 in [1], the convergence rate of Adam is … We prove Lemma 1 using the framework of online learning with bandit feedback. Consider a special case where … The result follows by plugging Lemma 3 into Theorem 2. In the main paper, we compared our method with Adam and with Adam using importance sampling, and showed plots of loss value vs. wall clock time. Here, we include plots of error rate vs. wall clock time.
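The "bandit sampling" the appendix refers to can be illustrated with an EXP3-style sampler that maintains per-example sampling probabilities and updates them from importance-weighted feedback. The class below is a hypothetical sketch under that assumption — the class name, hyperparameters, and reward definition are illustrative, not the paper's exact algorithm.

```python
import numpy as np

class Exp3Sampler:
    """EXP3-style bandit over n training examples: sample one example per
    step, then boost its weight in proportion to an observed reward,
    importance-weighted by the probability it was drawn with."""

    def __init__(self, n, eta=0.1, eps=0.01, seed=0):
        self.w = np.ones(n)          # one weight per example ("arm")
        self.eta, self.eps = eta, eps
        self.rng = np.random.default_rng(seed)

    def probs(self):
        p = self.w / self.w.sum()
        # mix in a uniform component so every example keeps being explored
        return (1 - self.eps) * p + self.eps / len(self.w)

    def sample(self):
        p = self.probs()
        i = self.rng.choice(len(p), p=p)
        return i, p[i]

    def update(self, i, reward, p_i):
        # importance-weighted reward estimate: only the sampled arm is updated
        self.w[i] *= np.exp(self.eta * reward / p_i)
```

In a training loop, `reward` would be derived from the sampled example's gradient signal, and the resulting gradient would be reweighted by `1 / p_i` before the Adam update to keep the estimate unbiased.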


Note that not all comments are replied to here, and our replies have to be short due to the space limit, but they will be fully addressed in a revision.

Neural Information Processing Systems

We appreciate the valuable comments, which urged us to make explicit connections to learning practice. We ask for reconsideration in light of these improvements, as our contribution is genuinely innovative and nontrivial. Re: the connection to learning, and when Conditions 1 & 2 hold: Fig. 1 shows that a large learning rate again produces the stochasticity our paper studies, and Conditions 1 & 2 use auxiliary random variables to define the needed f Re: can the isotropic noise assumption be weakened?


Gradient Surgery for Multi-Task Learning

Yu, Tianhe; Kumar, Saurabh; Gupta, Abhishek; Levine, Sergey; Hausman, Karol; Finn, Chelsea

arXiv.org Machine Learning

While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously proposed multi-task architectures for enhanced performance.
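The projection rule described in the abstract — project a task's gradient onto the normal plane of any conflicting task's gradient — can be sketched in a few lines of NumPy. This is a minimal illustration on flattened 1-D gradients; the function name and interface are assumptions for the example, not the authors' released implementation.

```python
import random
import numpy as np

def pcgrad(grads, seed=0):
    """Gradient surgery on a list of flattened task gradients: for each
    task, remove the component of its gradient that points against any
    other task's gradient (negative inner product), then sum the
    de-conflicted gradients."""
    rng = random.Random(seed)
    projected = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        others = [grads[j] for j in range(len(grads)) if j != i]
        rng.shuffle(others)  # visit the other tasks in random order
        for h in others:
            dot = float(g @ h)
            if dot < 0:  # conflicting gradient: project onto h's normal plane
                g -= (dot / float(h @ h)) * h
        projected.append(g)
    return np.sum(projected, axis=0)
```

For a conflicting pair such as `[1, 0]` and `[-1, 1]` (inner product −1), each gradient has its component along the other removed before summing, so neither task's update opposes the other; non-conflicting gradients pass through unchanged.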